90 research outputs found
Key point selection and clustering of swimmer coordination through Sparse Fisher-EM
To investigate whether optimal swimmer learning/teaching strategies exist, this
work introduces a two-level clustering in order to analyze the temporal
dynamics of motor learning in breaststroke swimming. Each level is performed
through Sparse Fisher-EM, an unsupervised framework which can be applied
efficiently to large and correlated datasets. The induced sparsity selects key
points of the coordination phase without any prior knowledge.
Comment: Presented at ECML/PKDD 2013 Workshop on Machine Learning and Data
Mining for Sports Analytics (MLSA2013)
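Sparse Fisher-EM is not available in common libraries; as a rough, hypothetical illustration of the two-level idea only, the sketch below uses plain k-means as a stand-in for Sparse Fisher-EM and synthetic data as a stand-in for swimmer coordination measurements. One level clusters per-frame coordination vectors, and a second level clusters trials by the profile of frame-level labels they contain:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means; a stand-in for Sparse Fisher-EM in this sketch."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Level 1: cluster per-frame coordination vectors (synthetic stand-in data).
rng = np.random.default_rng(1)
frames = np.vstack([rng.normal(0, 0.1, (100, 4)), rng.normal(1, 0.1, (100, 4))])
frame_labels = kmeans(frames, k=2)

# Level 2: cluster trials by the profile of frame-level labels they contain.
trials = frame_labels.reshape(20, 10)                  # 20 trials x 10 frames
profiles = np.stack([np.bincount(t, minlength=2) / 10 for t in trials])
trial_labels = kmeans(profiles.astype(float), k=2)
```

The actual method additionally induces sparsity in the discriminative subspace, which is what selects the key points of the coordination phase.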
Dilated Spatial Generative Adversarial Networks for Ergodic Image Generation
Generative models have recently received renewed attention as a result of
adversarial learning. Generative adversarial networks consist of a sample
generation model and a discrimination model able to distinguish between genuine
and synthetic samples. In combination with convolutional (for the
discriminator) and de-convolutional (for the generator) layers, they are
particularly suitable for image generation, especially of natural scenes.
However, the presence of fully connected layers adds global dependencies in the
generated images. This may lead to high and global variations in the generated
sample for small local variations in the input noise. In this work we propose
to use architectures based on fully convolutional networks (including, among
others, dilated layers), specifically designed to generate globally ergodic
images, that is, images without global dependencies. Conducted experiments
reveal that these architectures are well suited for generating natural textures
such as geologic structures.
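The locality argument can be illustrated with a toy 1-D "generator" built only from dilated convolutions: every output sample depends on a fixed, small window of input noise, so no global dependencies arise. This is a minimal numpy sketch with random (untrained) weights, not the paper's 2-D architecture:

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """Valid 1-D convolution with a dilated kernel (no padding)."""
    k = len(w)
    out_len = len(x) - dilation * (k - 1)
    return np.array([sum(w[j] * x[i + j * dilation] for j in range(k))
                     for i in range(out_len)])

# Toy fully convolutional "generator": stacked dilated layers, no dense layer.
rng = np.random.default_rng(0)
z = rng.normal(size=256)                       # local input noise
h = np.tanh(dilated_conv1d(z, rng.normal(size=3), dilation=1))
h = np.tanh(dilated_conv1d(h, rng.normal(size=3), dilation=2))
texture = dilated_conv1d(h, rng.normal(size=3), dilation=4)
# Each output sample depends on only 1 + 2*(1+2+4) = 15 input samples,
# so small local changes in z stay local in the output.
```

A fully connected layer in the same position would instead make every output depend on all of z, which is exactly the global dependency the abstract argues against.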
Automatic sensor-based detection and classification of climbing activities
This article presents a method to automatically detect and classify climbing
activities using inertial measurement units (IMUs) attached to the wrists, feet
and pelvis of the climber. The IMUs record limb acceleration and angular
velocity. Detection requires a learning phase with manual annotation to
construct the statistical models used in the CUSUM algorithm. Full-body
activity is then classified based on the detections from each IMU.
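A minimal one-sided CUSUM detector for a mean shift in a single synthetic IMU channel might look as follows; in the article, the statistical models and thresholds come from the manually annotated learning phase rather than being fixed by hand:

```python
import numpy as np

def cusum_alarm(x, mu0, mu1, sigma, h):
    """One-sided CUSUM for a mean shift from mu0 to mu1 with known sigma.
    Returns the first index where the cumulative log-likelihood-ratio
    statistic exceeds the threshold h, or None if no change is detected."""
    s = 0.0
    for t, xt in enumerate(x):
        llr = (mu1 - mu0) / sigma ** 2 * (xt - (mu0 + mu1) / 2)
        s = max(0.0, s + llr)                  # reset at zero (one-sided test)
        if s > h:
            return t
    return None

# Synthetic single-axis acceleration: rest, then movement starting at t=200.
rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(0.0, 1.0, 200),    # resting limb
                         rng.normal(2.0, 1.0, 100)])   # climbing movement
alarm = cusum_alarm(signal, mu0=0.0, mu1=2.0, sigma=1.0, h=10.0)
```

The alarm index marks the estimated onset of the activity, which can then be fed to the per-IMU classification stage.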
Similarity Contrastive Estimation for Image and Video Soft Contrastive Self-Supervised Learning
Contrastive representation learning has proven to be an effective
self-supervised learning method for images and videos. Most successful
approaches are based on Noise Contrastive Estimation (NCE) and use different
views of an instance as positives that should be contrasted with other
instances, called negatives, that are considered as noise. However, several
instances in a dataset are drawn from the same distribution and share
underlying semantic information. A good data representation should capture the
relations between instances, i.e., their semantic similarities and
dissimilarities, which contrastive learning harms by treating all negatives as
noise. To circumvent
this issue, we propose a novel formulation of contrastive learning using
semantic similarity between instances called Similarity Contrastive Estimation
(SCE). Our training objective is a soft contrastive one that brings the
positives closer and estimates a continuous distribution to push or pull
negative instances based on their learned similarities. We empirically validate
our approach on both image and video representation learning. We show that SCE
performs competitively with the state of the art on the ImageNet linear
evaluation protocol for fewer pretraining epochs and that it generalizes to
several downstream image tasks. We also show that SCE reaches state-of-the-art
results for pretraining video representation and that the learned
representation can generalize to video downstream tasks.
Comment: Extended version of our WACV 2023 paper to video self-supervised
learning
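A simplified numpy sketch of such a soft contrastive objective is given below; the names, the mixing weight `lam`, and the toy inputs are assumptions for illustration, and the actual method involves further components (e.g. a momentum encoder) not shown here:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)    # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def sce_loss(q, k, temp=0.1, lam=0.5):
    """Soft contrastive objective in the spirit of SCE (simplified sketch).
    q, k: L2-normalised embeddings of two views, shape (n, d).
    The target mixes the one-hot positive with a similarity distribution
    over the other instances; the loss is the cross-entropy to that target."""
    n = len(q)
    sim = k @ k.T / temp
    np.fill_diagonal(sim, -np.inf)             # mask self-similarity
    relational = softmax(sim)                  # estimated inter-instance relations
    target = (1 - lam) * np.eye(n) + lam * relational
    logits = q @ k.T / temp
    return -(target * np.log(softmax(logits))).sum(axis=1).mean()

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
q = x / np.linalg.norm(x, axis=1, keepdims=True)
loss = sce_loss(q, q.copy())                   # perfectly aligned views
```

Setting `lam=0` recovers a hard NCE-style objective with one-hot targets; `lam>0` is what lets similar negatives be pulled rather than uniformly pushed.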
Sparse Probabilistic Classifiers
The scores returned by support vector machines are often used as confidence measures in the classification of new examples. However, there is no theoretical argument sustaining this practice. Thus, when classification uncertainty has to be assessed, it is safer to resort to classifiers estimating conditional probabilities of class labels. Here, we focus on the ambiguity in the vicinity of the decision boundary. We propose an adaptation of maximum likelihood estimation, instantiated on logistic regression. The model outputs proper conditional probabilities within a user-defined interval and is less precise elsewhere. The model is also sparse, in the sense that few examples contribute to the solution. The computational efficiency is thus improved compared to logistic regression. Furthermore, preliminary experiments show improvements over standard logistic regression and performance similar to support vector machines.
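One way to make the interval idea concrete is a logistic regression whose likelihood is flattened outside a user-defined probability interval, so that only ambiguous examples keep contributing to the gradient. This is an illustrative adaptation under assumed names and toy data, not the paper's exact estimator:

```python
import numpy as np

def fit_interval_logreg(X, y, lo=0.2, hi=0.8, lr=0.1, iters=500):
    """Logistic regression with the likelihood flattened outside [lo, hi]:
    examples whose predicted probability leaves the interval contribute a
    zero gradient, so only the ambiguous examples shape the solution."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        active = (p > lo) & (p < hi)           # examples still contributing
        w -= lr * ((p - y) * active) @ X / len(y)
    return w

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
X = np.hstack([X, np.ones((100, 1))])          # append a bias column
y = np.r_[np.zeros(50), np.ones(50)]
w = fit_interval_logreg(X, y)
accuracy = ((1.0 / (1.0 + np.exp(-X @ w)) > 0.5) == y).mean()
```

The sparsity mechanism mirrors the abstract's claim: confidently classified examples drop out of the fit, in the same way that non-support vectors drop out of an SVM solution.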
Open Set Domain Adaptation using Optimal Transport
We present a 2-step optimal transport approach that performs a mapping from a
source distribution to a target distribution. Here, the target has the
particularity of containing new classes absent from the source domain. The
first step of the approach aims at rejecting the samples coming from these new
classes using an optimal transport plan. The second step solves the target
(class ratio) shift, again as an optimal transport problem. We develop a dual
approach to solve the optimization problem involved at each step and show that
our approach outperforms recent state-of-the-art methods. We further apply the
approach to the setting where the source and target distributions present both
a label shift and an increasing covariate (feature) shift to show its
robustness.
Comment: Accepted at ECML-PKDD 2020, Acknowledgements added
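The paper develops a dedicated dual solver; as a generic stand-in, entropy-regularised optimal transport via Sinkhorn iterations can illustrate how transport between source and target behaves when a target point belongs to no source class. The rejection criterion below (large transported cost) is an assumption for illustration, not the paper's rule:

```python
import numpy as np

def sinkhorn(a, b, C, reg=1.0, iters=200):
    """Entropy-regularised optimal transport via Sinkhorn iterations.
    a, b: source/target marginals; C: cost matrix. Returns the plan."""
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                      # scale to match target marginal
        u = a / (K @ v)                        # scale to match source marginal
    return u[:, None] * K * v[None, :]

# Toy setting: a target point far from all source mass incurs a large
# transported cost and can be flagged as a candidate new-class sample.
src = np.array([0.0, 1.0])
tgt = np.array([0.0, 1.0, 8.0])                # 8.0 mimics an unseen class
C = (src[:, None] - tgt[None, :]) ** 2
P = sinkhorn(np.full(2, 0.5), np.full(3, 1.0 / 3), C)
cost_per_target = (P * C).sum(axis=0)          # cost attributed to each target
```

In the balanced plan all target mass must be matched, so the outlier is revealed by its cost rather than by its received mass; the paper's first step instead rejects such samples directly through the transport plan.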
- …